# DeBERTa Architecture
## Ai Text Detector Academic V1.01
**Author:** desklib · **License:** MIT · **Task:** Text Classification · **Library:** Transformers · **Language:** English · **Downloads:** 255 · **Likes:** 3

A fine-tuned AI-generated-text detection model based on DeBERTa-v3-large, optimized for academic scenarios.
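Like most of the classifiers in this listing, the detector can be driven through the Transformers pipeline API. A minimal sketch, assuming the model is hosted on the Hugging Face Hub; the repo id `desklib/ai-text-detector-academic-v1.01` is inferred from the listing and may differ:

```python
# Minimal sketch: running a DeBERTa-based text classifier via the pipeline API.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="desklib/ai-text-detector-academic-v1.01",  # assumed repo id
)
print(detector("The results demonstrate a statistically significant improvement."))
```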
## Camembertav2 Base
**Author:** almanach · **License:** MIT · **Task:** Large Language Model · **Library:** Transformers · **Language:** French · **Downloads:** 2,972 · **Likes:** 19

CamemBERTav2 is a French language model pretrained on 275 billion tokens of French text. It uses the DebertaV2 architecture and performs strongly across a range of French NLP tasks.
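Pretrained encoders like this one are typically exercised through masked-token prediction before any fine-tuning. A minimal sketch, assuming the repo id `almanach/camembertav2-base` (inferred from the listing) and the `[MASK]` token used by DeBERTa-v2 tokenizers:

```python
# Minimal sketch: masked-token prediction with a pretrained encoder.
from transformers import pipeline

fill = pipeline("fill-mask", model="almanach/camembertav2-base")  # assumed repo id
for pred in fill("Paris est la [MASK] de la France."):
    print(pred["token_str"], round(pred["score"], 3))
```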
## Mdeberta V3 Base Sentiment
**Author:** agentlans · **License:** MIT · **Task:** Text Classification · **Library:** Safetensors · **Language:** Other · **Downloads:** 101 · **Likes:** 1

A multilingual sentiment classification model fine-tuned from mDeBERTa-v3-base, supporting cross-lingual sentiment analysis.
## Cyner 2.0 DeBERTa V3 Base
**Author:** PranavaKailash · **License:** MIT · **Task:** Sequence Labeling · **Library:** Transformers · **Language:** English · **Downloads:** 164 · **Likes:** 2

CyNER 2.0 is a named entity recognition model for the cybersecurity domain. Built on the DeBERTa architecture, it identifies a range of cybersecurity-related entities.
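Sequence-labeling models like CyNER are served by the token-classification pipeline, which can also merge sub-word pieces back into whole entity spans. A minimal sketch; the repo id is an assumption inferred from the listing:

```python
# Minimal sketch: NER with a DeBERTa-based token-classification model.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="PranavaKailash/CyNER-2.0-DeBERTa-v3-base",  # assumed repo id
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("The malware exploits CVE-2021-44228 to deploy a remote access trojan."))
```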
## Kf Deberta Base
**Author:** kakaobank · **License:** MIT · **Task:** Large Language Model · **Library:** Transformers · **Language:** Korean · **Downloads:** 783 · **Likes:** 46

KF-DeBERTa is a Korean financial-domain language model jointly released by KakaoBank and FNGuide, built on the DeBERTa-v2 architecture. It performs strongly on downstream tasks in both the general and financial domains.
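For encoder-only models released without a task head, the usual entry point is AutoTokenizer/AutoModel, which yields contextual embeddings for downstream use. A minimal sketch; the repo id `kakaobank/kf-deberta-base` is inferred from the listing:

```python
# Minimal sketch: extracting contextual embeddings from an encoder.
import torch
from transformers import AutoModel, AutoTokenizer

name = "kakaobank/kf-deberta-base"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("금리 인상이 주가에 미치는 영향", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_dim)
print(hidden.shape)
```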
## Deberta V3 Japanese Xsmall
**Author:** globis-university · **Task:** Large Language Model · **Library:** Transformers · **Language:** Japanese · **Downloads:** 96 · **Likes:** 4

A DeBERTa V3 model trained on Japanese resources. It is optimized for Japanese and does not require a morphological analyzer at inference time.
## Albertina 100m Portuguese Ptbr Encoder
**Author:** PORTULAN · **License:** MIT · **Task:** Large Language Model · **Library:** Transformers · **Language:** Other · **Downloads:** 131 · **Likes:** 7

Albertina 100M PTBR is a foundational language model for Brazilian Portuguese. An encoder of the BERT family, it is based on the Transformer architecture and developed on top of the DeBERTa model.
## Albertina 100m Portuguese Ptpt Encoder
**Author:** PORTULAN · **License:** MIT · **Task:** Large Language Model · **Library:** Transformers · **Language:** Other · **Downloads:** 171 · **Likes:** 4

Albertina 100M PTPT is a foundational language model for European Portuguese (Portugal). An encoder of the BERT family, it is based on the Transformer architecture and developed on top of the DeBERTa model.
## Deberta V2 Base Japanese Finetuned QAe
**Author:** Mizuiro-sakura · **License:** MIT · **Task:** Question Answering System · **Library:** Transformers · **Language:** Japanese · **Downloads:** 73 · **Likes:** 3

A Japanese question-answering model fine-tuned from deberta-v2-base-japanese on the DDQA dataset, suitable for extractive QA tasks.
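Extractive QA models take a question plus a context passage and return a span from the context. A minimal sketch; the repo id is an assumption inferred from the listing:

```python
# Minimal sketch: extractive question answering with the pipeline API.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Mizuiro-sakura/deberta-v2-base-japanese-finetuned-QAe",  # assumed repo id
)
result = qa(
    question="富士山の標高は何メートルですか？",
    context="富士山は日本最高峰の山であり、標高は3776メートルである。",
)
print(result["answer"], round(result["score"], 3))
```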
## Deberta V2 Base Japanese
**Author:** ku-nlp · **Task:** Large Language Model · **Library:** Transformers · **Language:** Japanese · **Downloads:** 38.93k · **Likes:** 29

A Japanese DeBERTa V2 base model pretrained on the Japanese Wikipedia, CC-100, and OSCAR corpora, suitable for masked language modeling and for fine-tuning on downstream tasks.
## Deberta Base Japanese Wikipedia
**Author:** KoichiYasuoka · **Task:** Large Language Model · **Library:** Transformers · **Language:** Japanese · **Downloads:** 32 · **Likes:** 2

A DeBERTa (V2) model pretrained on Japanese Wikipedia and Aozora Bunko texts, suitable for Japanese text-processing tasks.
## Deberta Base Finetuned Aqa Squad1
**Author:** stevemobs · **License:** MIT · **Task:** Question Answering System · **Library:** Transformers · **Downloads:** 14 · **Likes:** 0

A fine-tuned version of DeBERTa-base trained on the SQuAD question-answering dataset, designed for automatic question-answering tasks.
## Deberta Base Finetuned Aqa
**Author:** stevemobs · **License:** MIT · **Task:** Question Answering System · **Library:** Transformers · **Downloads:** 15 · **Likes:** 0

A question-answering model based on microsoft/deberta-base, fine-tuned on the adversarial_qa dataset.
## Deberta Base Finetuned Squad1 Aqa
**Author:** stevemobs · **License:** MIT · **Task:** Question Answering System · **Library:** Transformers · **Downloads:** 15 · **Likes:** 0

A question-answering model based on DeBERTa-base, first fine-tuned on SQuAD1 and then further fine-tuned on the adversarial_qa dataset.
## Nbme Deberta Large
**Author:** smeoni · **License:** MIT · **Task:** Large Language Model · **Library:** Transformers · **Downloads:** 136 · **Likes:** 0

A fine-tuned version of microsoft/deberta-large for a specific downstream task.
## Nli Deberta V3 Large
**Author:** navteca · **License:** Apache-2.0 · **Task:** Text Classification · **Library:** Transformers · **Language:** English · **Downloads:** 24 · **Likes:** 3

A cross-encoder based on microsoft/deberta-v3-large, trained for natural language inference on the SNLI and MultiNLI datasets.
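A cross-encoder scores a premise/hypothesis pair jointly rather than embedding each sentence separately. A minimal sketch using the sentence-transformers CrossEncoder API, assuming the repo id `navteca/nli-deberta-v3-large` from the listing:

```python
# Minimal sketch: NLI scoring with a cross-encoder.
from sentence_transformers import CrossEncoder

model = CrossEncoder("navteca/nli-deberta-v3-large")  # assumed repo id
scores = model.predict([
    ("A man is eating pizza.", "A man eats something."),
    ("A man is eating pizza.", "The man is sleeping."),
])
# One row of logits per pair; the label order (e.g. contradiction /
# entailment / neutral) comes from the model's config.
labels = [model.config.id2label[int(i)] for i in scores.argmax(axis=1)]
print(labels)
```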
## Deberta V3 Base Goemotions
**Author:** mrm8488 · **License:** MIT · **Task:** Text Classification · **Library:** Transformers · **Downloads:** 81 · **Likes:** 1

A text sentiment classification model fine-tuned from microsoft/deberta-v3-base; the training dataset is not documented, and the reported evaluation F1 score is 0.4468.
## Deberta V3 Large Finetuned Mnli
**Author:** mrm8488 · **License:** MIT · **Task:** Text Classification · **Library:** Transformers · **Language:** English · **Downloads:** 31 · **Likes:** 2

A DeBERTa-v3-large model fine-tuned on the GLUE MNLI dataset for natural language inference, reaching 90% accuracy on the validation set.
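An MNLI-fine-tuned model can also drive zero-shot classification, where candidate labels are posed as entailment hypotheses. A minimal sketch; the repo id is an assumption inferred from the listing:

```python
# Minimal sketch: zero-shot classification backed by an MNLI model.
from transformers import pipeline

zsc = pipeline(
    "zero-shot-classification",
    model="mrm8488/deberta-v3-large-finetuned-mnli",  # assumed repo id
)
out = zsc(
    "The central bank raised interest rates by 50 basis points.",
    candidate_labels=["economics", "sports", "technology"],
)
print(out["labels"][0], round(out["scores"][0], 3))
```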
## Deberta V3 Large Sst2 Train 8 8
**Author:** SetFit · **License:** MIT · **Task:** Text Classification · **Library:** Transformers · **Downloads:** 17 · **Likes:** 0

A sentiment analysis model based on microsoft/deberta-v3-large, fine-tuned on the SST-2 dataset.
## Deberta V3 Large Sst2 Train 8 1
**Author:** SetFit · **License:** MIT · **Task:** Text Classification · **Library:** Transformers · **Downloads:** 17 · **Likes:** 0

A text classification model based on microsoft/deberta-v3-large, fine-tuned on the SST-2 dataset.
## Deberta V3 Large Sst2 Train 8 9
**Author:** SetFit · **License:** MIT · **Task:** Text Classification · **Library:** Transformers · **Downloads:** 17 · **Likes:** 0

A text classification model based on microsoft/deberta-v3-large, fine-tuned on the SST-2 dataset.
## Deberta V3 Large Sst2 Train 16 4
**Author:** SetFit · **License:** MIT · **Task:** Text Classification · **Library:** Transformers · **Downloads:** 18 · **Likes:** 0

A sentiment analysis model based on microsoft/deberta-v3-large, fine-tuned on the SST-2 dataset.